
Laplacian Eigenmap


Total Variation Classes Beyond 1d: Minimax Rates, and the Limitations of Linear Smoothers

Neural Information Processing Systems

We consider the problem of estimating a function defined over $n$ locations on a $d$-dimensional grid (having all side lengths equal to $n^{1/d}$). When the function is constrained to have discrete total variation bounded by $C_n$, we derive the minimax optimal (squared) $\ell_2$ estimation error rate, parametrized by $n, C_n$. Total variation denoising, also known as the fused lasso, is seen to be rate optimal. Several simpler estimators exist, such as Laplacian smoothing and Laplacian eigenmaps. A natural question is: can these simpler estimators perform just as well?
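The abstract contrasts total variation denoising with simpler linear smoothers such as Laplacian smoothing. As an illustration (not the paper's code; all names are ours), here is a minimal sketch of Laplacian smoothing on a 2d grid: the linear smoother solves $(I + \lambda L)\hat{f} = y$, where $L$ is the combinatorial Laplacian of the grid graph.

```python
import numpy as np

def grid_laplacian(m):
    """Combinatorial Laplacian of an m x m 2d grid graph (n = m*m nodes)."""
    n = m * m
    L = np.zeros((n, n))
    for i in range(m):
        for j in range(m):
            u = i * m + j
            for di, dj in ((0, 1), (1, 0)):  # right and down neighbors
                ii, jj = i + di, j + dj
                if ii < m and jj < m:
                    v = ii * m + jj
                    L[u, u] += 1
                    L[v, v] += 1
                    L[u, v] -= 1
                    L[v, u] -= 1
    return L

def laplacian_smoothing(y, L, lam):
    """Linear smoother: argmin_f ||y - f||^2 + lam * f' L f."""
    return np.linalg.solve(np.eye(len(y)) + lam * L, y)

m = 8
L = grid_laplacian(m)
rng = np.random.default_rng(0)
# piecewise-constant signal (bounded total variation) plus noise
signal = np.kron([[0.0, 1.0], [1.0, 0.0]], np.ones((m // 2, m // 2))).ravel()
y = signal + 0.3 * rng.standard_normal(m * m)
fhat = laplacian_smoothing(y, L, lam=2.0)
```

Because the constant vector lies in the null space of $L$, this smoother preserves the mean of the observations exactly; the choice of $\lambda$ trades variance reduction against bias at the discontinuities, which is exactly where the paper shows linear smoothers fall short of the fused lasso.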




A Proofs

Neural Information Processing Systems

A.2 Proof of Proposition 1. Let $P \in \{B, D, E\}$ and let $k$ be a valid kernel (satisfying the assumptions of Theorem 1) with kernel matrix $K$. Inverting the conditional with Bayes' rule gives the posterior over $W \in S$. As a complement, we make explicit the simple forms taken by the posterior limit graph in each case.

A.3 Proof of Theorem 2. We consider the following hierarchical model; it can nonetheless be simplified, as we now show. We first focus on finding the optimal eigenvectors. Only the left term in (18) depends on $R$, so the optimization problem for the eigenvectors reduces to a trace minimization. Note that the identity permutation, i.e., $\sigma(i) = i$ for $i \in [n]$, is optimal in this case. We choose this $U$ in what follows, as the signs of the axes do not influence the characterization of the final result in $Z$ as a PCA embedding. Note that this solution is not unique if there are repeated eigenvalues.
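The eigenvector selection discussed in this proof, including the sign ambiguity of the axes and the non-uniqueness under repeated eigenvalues, is the same phenomenon one sees in a standard Laplacian eigenmap embedding. A minimal sketch (our own illustration, not the paper's construction) of the unnormalized variant:

```python
import numpy as np

def laplacian_eigenmap(W, dim):
    """Embed the nodes of a weighted graph with symmetric adjacency W into
    `dim` dimensions using the eigenvectors of the combinatorial Laplacian
    with the smallest nonzero eigenvalues (unnormalized Laplacian eigenmaps)."""
    L = np.diag(W.sum(axis=1)) - W
    vals, vecs = np.linalg.eigh(L)  # eigh returns eigenvalues in ascending order
    # skip the constant eigenvector (eigenvalue ~0); note each returned
    # eigenvector is only defined up to sign, and the subspace is not unique
    # when eigenvalues repeat
    return vecs[:, 1:1 + dim]

# two triangles joined by a single edge (2, 3): the 1d embedding
# separates the two clusters by sign
W = np.zeros((6, 6))
for a, b in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    W[a, b] = W[b, a] = 1.0
Z = laplacian_eigenmap(W, dim=1)
```

The sign of each coordinate axis is arbitrary (as the proof notes, it does not affect the characterization of the embedding), so only relative signs between nodes are meaningful.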








On Differentially Private Graph Sparsification and Applications

Raman Arora, Jalaj Upadhyay

Neural Information Processing Systems

In this paper, we study private sparsification of graphs. In particular, we give an algorithm that, given an input graph, returns a sparse graph approximating the spectrum of the input graph while ensuring differential privacy.
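To make the privacy side of this concrete: a generic baseline (explicitly not the paper's sparsification algorithm, and with names of our choosing) is the standard Gaussian mechanism applied to edge weights, calibrated for edge-level $(\varepsilon, \delta)$-differential privacy with the usual bound $\sigma \ge \Delta \sqrt{2 \ln(1.25/\delta)} / \varepsilon$:

```python
import numpy as np

def gaussian_mechanism_weights(W, eps, delta, sensitivity=1.0, rng=None):
    """Release a noisy symmetric edge-weight matrix via the Gaussian mechanism.

    `sensitivity` is the maximum change in any edge weight between
    neighboring graphs (1.0 for unweighted, edge-level neighboring)."""
    rng = rng or np.random.default_rng()
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    noise = np.triu(rng.normal(0.0, sigma, size=W.shape), 1)
    noise = noise + noise.T  # symmetrize: one noise draw per undirected edge
    return W + noise

W = np.ones((4, 4)) - np.eye(4)  # complete graph on 4 nodes
W_private = gaussian_mechanism_weights(W, eps=1.0, delta=1e-5,
                                       rng=np.random.default_rng(0))
```

Such a dense release perturbs every potential edge; the appeal of the paper's approach is that a private *sparsifier* keeps the output small while still approximating the input graph's spectrum.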